Parallel processing is a method of processing tasks in which a task is broken into fragments, each of which is handled by a separate processor within the computer. To parallel process a task, therefore, a computer must have more than one processor, and those processors must be able to run simultaneously. Parallel processing is different from multitasking: parallel processing works on various aspects of a single task simultaneously (and so completes the whole task faster), whereas multitasking runs separate tasks by rapidly switching among them, handling them in effectively sequential order. Large parallel computers can have thousands of separate processors. Such systems are especially helpful in simulation experiments that involve a large number of independent fragments which must be modeled.
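The idea of breaking a task into independent fragments and handing each to a separate processor can be sketched in Python using the standard-library `multiprocessing` module. The fragment function and its workload here are hypothetical stand-ins, assuming a simulation whose pieces do not depend on one another:

```python
from multiprocessing import Pool

def simulate_fragment(seed):
    # Stand-in for one independent fragment of a larger simulation.
    total = 0
    for i in range(1000):
        total += (seed * i) % 7
    return total

def run_parallel(seeds, workers=4):
    # Each fragment is dispatched to a separate worker process;
    # the fragments run simultaneously rather than one after another.
    with Pool(processes=workers) as pool:
        return pool.map(simulate_fragment, seeds)

if __name__ == "__main__":
    results = run_parallel(range(8))
    print(len(results))
```

Because the fragments are independent, the results come back in the same order as the inputs regardless of which worker finished first, which is what makes this kind of simulation workload a good match for parallel hardware.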
The single channel between the one processor and memory in a single-processor machine limits speed, because all the data must travel along that channel. In much the same way as adding lanes to a freeway eases congestion (increasing speed), adding multiple processor paths can shorten processing time for parallel-processed tasks. However, even parallel configurations have limits, and after a point increasing speed is not a simple matter of adding processors to a system. Software must therefore be engineered to make more efficient use of the parallel architecture. With software that exploits the benefits of parallel processing, many researchers believe that most computers in the future will rely heavily upon parallel architectures.
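The point that adding processors stops helping after a while is commonly formalized as Amdahl's law: if some fraction of a task is inherently serial, that fraction caps the overall speedup no matter how many processors are added. A minimal sketch, assuming a hypothetical task whose serial fraction is known:

```python
def speedup(serial_fraction, processors):
    # Amdahl's law: only the parallelizable portion of the task
    # benefits from extra processors; the serial portion does not.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# With 10% of the work serial, speedup can never exceed 10x,
# however many processors the system has.
for p in (1, 4, 16, 1000):
    print(p, round(speedup(0.1, p), 2))
```

This is why, past a certain scale, engineering the software to shrink the serial fraction pays off more than adding hardware.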